model predict: move getModel from rest to transport #3687


Open · wants to merge 1 commit into main
Conversation

@jngz-es (Collaborator) commented Mar 26, 2025

Description

To prevent a security issue, move the model-index read (getModel) out of the REST handler and behind a transport call.
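
For readers outside the codebase, the intended shape is roughly the sketch below: the REST handler no longer reads the model index itself; the transport action does, under a stashed thread context. This is a minimal sketch under assumptions, not the actual ml-commons change — only ThreadContext, stashContext, and modelManager.getModel appear in the diff itself; the class name, method name, and wiring here are hypothetical, and import packages vary across OpenSearch versions.

    // Minimal sketch (hypothetical names) of reading a system index from the
    // transport layer under a stashed thread context.
    // NOTE: package names vary across OpenSearch versions; adjust as needed.
    import org.opensearch.client.Client;
    import org.opensearch.common.util.concurrent.ThreadContext;
    import org.opensearch.core.action.ActionListener;
    import org.opensearch.ml.common.MLModel;
    import org.opensearch.ml.model.MLModelManager;

    public class TransportPredictSketch { // hypothetical class name

        private final Client client;
        private final MLModelManager modelManager; // wiring assumed

        TransportPredictSketch(Client client, MLModelManager modelManager) {
            this.client = client;
            this.modelManager = modelManager;
        }

        void resolveModel(String modelId, ActionListener<MLModel> listener) {
            // Stash the caller's thread context so the read of the (system)
            // model index runs with plugin privileges rather than the user's.
            try (ThreadContext.StoredContext context = client.threadPool().getThreadContext().stashContext()) {
                // Restore the original context before handing the result back.
                modelManager.getModel(modelId, ActionListener.runBefore(listener, context::restore));
            }
        }
    }

Doing this in the transport action rather than the REST layer keeps the privileged read on the node that executes the action, so REST handlers never need direct index access.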

Related Issues

#3651

Check List

  • New functionality includes testing.
  • New functionality has been documented.
  • API changes companion pull request created.
  • Commits are signed per the DCO using --signoff.
  • Public documentation issue/PR created.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following the Developer Certificate of Origin and signing off your commits, please check here.

Signed-off-by: Jing Zhang <jngz@amazon.com>
    };
    MLPredictionTaskRequest predictionRequest = getRequest(
        modelId,
        Objects.requireNonNullElse(userAlgorithm, functionName.get().name()),
Collaborator commented:

Is it possible both userAlgorithm and functionName are null?
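
Worth noting alongside this question: the fallback argument to Objects.requireNonNullElse is evaluated eagerly, so an empty functionName fails on .get() before any null handling runs, and if both arguments really are null, requireNonNullElse itself throws. A standalone sketch, using plain String stand-ins for the real types:

    import java.util.NoSuchElementException;
    import java.util.Objects;
    import java.util.Optional;

    public class NullFallbackDemo {
        public static void main(String[] args) {
            String userAlgorithm = null;                 // stand-in for the nullable request field
            Optional<String> functionName = Optional.empty();

            try {
                // The fallback expression is evaluated before requireNonNullElse
                // runs, so an empty Optional fails here first:
                String algorithm = Objects.requireNonNullElse(userAlgorithm, functionName.get());
                System.out.println(algorithm);
            } catch (NoSuchElementException e) {
                System.out.println(e.getMessage());      // prints "No value present"
            }

            // And if both arguments are null, requireNonNullElse itself throws:
            // Objects.requireNonNullElse(null, null)    -> NullPointerException
        }
    }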

    });
    try (ThreadContext.StoredContext context = client.threadPool().getThreadContext().stashContext()) {
        modelManager
            .getModel(
Collaborator commented:

After refactoring, this method will not be invoked; is that correct?

@jngz-es (Collaborator, Author) commented Mar 26, 2025

Got a test failure, shown below; looking into it.

RestMLRemoteInferenceIT > testPredictWithAutoDeployAndTTL_RemoteModel FAILED
    org.opensearch.client.ResponseException: method [POST], host [http://127.0.0.1:50334/], URI [/_plugins/_ml/models/lvvV05UBs7bCuZJ1kON-/_predict], status line [HTTP/1.1 500 Internal Server Error]
    {"error":{"root_cause":[{"type":"no_such_element_exception","reason":"No value present"}],"type":"no_such_element_exception","reason":"No value present"},"status":500}
        at __randomizedtesting.SeedInfo.seed([93AC97C5F10084B1:2630A5DEADA1287B]:0)
        at app//org.opensearch.client.RestClient.convertResponse(RestClient.java:501)
        at app//org.opensearch.client.RestClient.performRequest(RestClient.java:384)
        at app//org.opensearch.client.RestClient.performRequest(RestClient.java:359)
        at app//org.opensearch.ml.utils.TestHelper.makeRequest(TestHelper.java:184)
        at app//org.opensearch.ml.utils.TestHelper.makeRequest(TestHelper.java:157)
        at app//org.opensearch.ml.utils.TestHelper.makeRequest(TestHelper.java:146)
        at app//org.opensearch.ml.rest.RestMLRemoteInferenceIT.predictRemoteModel(RestMLRemoteInferenceIT.java:1120)
        at app//org.opensearch.ml.rest.RestMLRemoteInferenceIT.testPredictWithAutoDeployAndTTL_RemoteModel(RestMLRemoteInferenceIT.java:243)
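
"No value present" is the standard message of the NoSuchElementException thrown by Optional.get() (or the no-arg orElseThrow()) on an empty Optional, which is consistent with the functionName.get() call flagged in the review above, though the thread does not confirm that as the root cause. A hypothetical guard that would surface a clearer error instead:

    import java.util.Optional;

    public class OptionalGuardDemo {
        // Hypothetical helper, not the fix that landed: resolve the algorithm
        // name without calling get() on a possibly-empty Optional.
        static String resolveAlgorithm(String userAlgorithm, Optional<String> functionName) {
            if (userAlgorithm != null) {
                return userAlgorithm;
            }
            // orElseThrow with an explicit message gives an actionable error
            // instead of the generic "No value present".
            return functionName.orElseThrow(() -> new IllegalArgumentException("algorithm is required"));
        }

        public static void main(String[] args) {
            System.out.println(resolveAlgorithm("remote", Optional.empty()));          // remote
            System.out.println(resolveAlgorithm(null, Optional.of("TEXT_EMBEDDING"))); // TEXT_EMBEDDING
            resolveAlgorithm(null, Optional.empty());  // throws IllegalArgumentException
        }
    }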

Comment on lines +282 to +285
    } else if (FunctionName.isDLModel(FunctionName.from(modelType.toUpperCase(Locale.ROOT)))
        && !mlFeatureEnabledSetting.isLocalModelEnabled()) {
        throw new IllegalStateException(LOCAL_MODEL_DISABLED_ERR_MSG);
    }
Contributor commented:

nit: redundant else if, can be a separate if block
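
Applied to the snippet above, the nit would look roughly like this; constant and setting names are taken from elsewhere in this diff, and the sketch is the suggested shape, not the committed code. Since each guard throws, independent if-blocks are equivalent to the else-if chain and read more clearly:

    if (FunctionName.REMOTE.name().equals(modelType) && !mlFeatureEnabledSetting.isRemoteInferenceEnabled()) {
        throw new IllegalStateException(REMOTE_INFERENCE_DISABLED_ERR_MSG);
    }
    if (FunctionName.isDLModel(FunctionName.from(modelType.toUpperCase(Locale.ROOT)))
        && !mlFeatureEnabledSetting.isLocalModelEnabled()) {
        throw new IllegalStateException(LOCAL_MODEL_DISABLED_ERR_MSG);
    }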

Comment on lines -146 to +105
    if (FunctionName.REMOTE.name().equals(modelType) && !mlFeatureEnabledSetting.isRemoteInferenceEnabled()) {
        throw new IllegalStateException(REMOTE_INFERENCE_DISABLED_ERR_MSG);
    } else if (FunctionName.isDLModel(FunctionName.from(modelType.toUpperCase(Locale.ROOT)))
        && !mlFeatureEnabledSetting.isLocalModelEnabled()) {
        throw new IllegalStateException(LOCAL_MODEL_DISABLED_ERR_MSG);
-   } else if (ActionType.BATCH_PREDICT == actionType && !mlFeatureEnabledSetting.isOfflineBatchInferenceEnabled()) {
+   if (ActionType.BATCH_PREDICT == actionType && !mlFeatureEnabledSetting.isOfflineBatchInferenceEnabled()) {
Contributor commented:

👍
